Multiple Objective Nonatomic Markov Decision Processes with Total Reward Criteria
Authors
Abstract
Similar Resources
Markov Decision Processes with Arbitrary Reward Processes
We consider a learning problem where the decision maker interacts with a standard Markov decision process, with the exception that the reward functions vary arbitrarily over time. We show that, against every possible realization of the reward process, the agent can perform as well—in hindsight—as every stationary policy. This generalizes the classical no-regret result for repeated games. Specif...
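A worked form of the criterion this abstract appeals to, under notation assumed here rather than taken from the paper (the horizon $T$, the realized reward functions $r_t$, and the set of stationary policies $\Pi_{\mathrm{stat}}$): performing as well in hindsight as every stationary policy amounts to sublinear regret,

\[ \mathrm{Regret}_T \;=\; \max_{\pi \in \Pi_{\mathrm{stat}}} \mathbb{E}\Big[\sum_{t=1}^{T} r_t\big(x_t^{\pi}, \pi(x_t^{\pi})\big)\Big] \;-\; \mathbb{E}\Big[\sum_{t=1}^{T} r_t(x_t, a_t)\Big] \;=\; o(T), \]

where $x_t^{\pi}$ denotes the state trajectory that following $\pi$ would induce against the same reward sequence.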
Splitting Randomized Stationary Policies in Total-Reward Markov Decision Processes
This paper studies a discrete-time total-reward Markov decision process (MDP) with a given initial state distribution. A (randomized) stationary policy can be split on a given set of states if the occupancy measure of this policy can be expressed as a convex combination of the occupancy measures of stationary policies, each selecting deterministic actions on the given set and coinciding with th...
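A sketch of the objects involved, with notation assumed here and not quoted from the paper: for an initial distribution $\mu$ and stationary policy $\pi$, the (total-reward) occupancy measure can be written as

\[ Q_{\mu,\pi}(s,a) \;=\; \sum_{t=0}^{\infty} \mathbb{P}^{\pi}_{\mu}\big(x_t = s,\; a_t = a\big), \]

and splitting $\pi$ on a set of states $Z$ means exhibiting policies $\pi_1,\dots,\pi_k$ that act deterministically on $Z$ and coincide with $\pi$ elsewhere, together with weights $\alpha_i \ge 0$, $\sum_i \alpha_i = 1$, such that $Q_{\mu,\pi} = \sum_i \alpha_i\, Q_{\mu,\pi_i}$.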
Decision Processes with Total-Cost Criteria
By a decision process is meant a pair (X, r), where X is an arbitrary set (the state space), and r associates to each point x in X an arbitrary nonempty collection of discrete probability measures (actions) on X. In a decision process with nonnegative costs depending on the current state, the action taken, and the following state, there is always available a Markov strategy which uniformly (nea...
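For orientation, the total-cost criterion being referred to can be stated as follows (notation assumed, not quoted from the paper): a strategy $\sigma$ started at $x$ is evaluated by

\[ v_{\sigma}(x) \;=\; \mathbb{E}^{\sigma}_{x}\Big[\sum_{t=0}^{\infty} c\big(x_t, p_t, x_{t+1}\big)\Big], \qquad c \ge 0, \]

where $p_t$ is the action (probability measure) chosen at time $t$; the quoted result concerns Markov strategies that nearly attain $\inf_{\sigma} v_{\sigma}(x)$ uniformly in $x$.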
Average-Reward Decentralized Markov Decision Processes
Formal analysis of decentralized decision making has become a thriving research area in recent years, producing a number of multi-agent extensions of Markov decision processes. While much of the work has focused on optimizing discounted cumulative reward, optimizing average reward is sometimes a more suitable criterion. We formalize a class of such problems and analyze its characteristics, show...
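A minimal statement of the average-reward criterion mentioned here, with notation assumed: a (joint) policy $\pi$ is scored by its long-run average reward

\[ g^{\pi}(s) \;=\; \liminf_{T \to \infty} \frac{1}{T}\, \mathbb{E}^{\pi}_{s}\Big[\sum_{t=0}^{T-1} r(x_t, a_t)\Big], \]

in place of the discounted cumulative reward $\mathbb{E}^{\pi}_s\big[\sum_{t} \gamma^{t} r(x_t, a_t)\big]$.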
Bounded Parameter Markov Decision Processes with Average Reward Criterion
Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with uncertainty in the parameters of a Markov Decision Process (MDP). Unlike the case of an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs. Here we pro...
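A sketch of the two optimality notions in the discounted case that the abstract builds on, with notation assumed here ($\mathcal{P}(s,a)$ for the set of transition laws compatible with the parameter bounds, $\gamma$ for the discount factor): the optimistic and pessimistic value functions satisfy

\[ \overline{V}(s) = \max_{a}\, \max_{P \in \mathcal{P}(s,a)} \Big[ r(s,a) + \gamma \sum_{s'} P(s')\, \overline{V}(s') \Big], \qquad \underline{V}(s) = \max_{a}\, \min_{P \in \mathcal{P}(s,a)} \Big[ r(s,a) + \gamma \sum_{s'} P(s')\, \underline{V}(s') \Big], \]

and the paper's contribution concerns the analogous constructions under the average-reward criterion.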
Journal
Journal title: Journal of Mathematical Analysis and Applications
Year: 2000
ISSN: 0022-247X
DOI: 10.1006/jmaa.2000.6819